List of AI News about DeepSeek V3
| Time | Details |
|---|---|
| 2026-03-18 10:08 | SkillNet Breakthrough: 200,000 Reusable Skills Boost Agent Performance by 40% Across DeepSeek V3, Gemini 2.5 Pro, and o4-mini. According to God of Prompt on X, Zhejiang University, Alibaba, Tencent, and 15 partner institutions introduced SkillNet, a shared library of 200,000+ reusable skills that any AI agent can call instead of relearning them each session. As reported in the X post, tests of DeepSeek V3, Gemini 2.5 Pro, and o4-mini across three environments showed an average 40% reward improvement and 30% fewer execution steps versus baselines, with immediate skill transfer requiring no retraining or parameter updates. According to the post, the repository includes 150,000+ curated skills evaluated on safety, completeness, executability, maintainability, and cost. If verified on broader benchmarks, this infrastructure could cut agent operating costs, shorten development cycles for autonomous workflows, and enable cross-model capability sharing for enterprise automation (a hypothetical sketch of the registry pattern appears after the table). |
| 2026-01-03 12:47 | MoE vs Dense Models: Cost, Flexibility, and Open-Source Opportunities in Large Language Models. According to God of Prompt on Twitter, the evolution of Mixture of Experts (MoE) models is creating significant advantages for the open-source AI community over dense models. Dense models like Meta's Llama 405B require retraining the entire model for any update, which reportedly cost over $50 million (source: God of Prompt, Jan 3, 2026). In contrast, DeepSeek's V3 MoE model achieved better results at a training cost of $5.6 million and offers modularity, allowing individual experts to be fine-tuned and upgraded independently. For AI businesses and developers, MoE architectures offer a scalable, cost-effective approach that supports rapid innovation and targeted enhancements, widening the gap between dense and modular models in open-source development (a toy MoE routing sketch appears after the table). |
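
The core idea reported in the SkillNet item, a shared registry of vetted skills that any agent can call without retraining, can be illustrated with a minimal sketch. Everything here (the `Skill` and `SkillRegistry` names, the quality fields, the `lookup` API) is a hypothetical illustration of the pattern, not SkillNet's published interface.

```python
# Hypothetical sketch of a shared skill registry: agents call vetted, reusable
# skills instead of relearning them each session. Names and fields are
# illustrative assumptions, not SkillNet's actual API.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Skill:
    name: str
    run: Callable[..., object]  # executable behavior any agent can invoke
    # Quality metadata mirroring the evaluation axes the post mentions.
    safety: float = 0.0
    completeness: float = 0.0
    executability: float = 0.0
    maintainability: float = 0.0
    cost: float = 0.0  # estimated cost per invocation; lower is better


class SkillRegistry:
    """Shared library: skills transfer across agents with no parameter updates."""

    def __init__(self) -> None:
        self._skills: Dict[str, Skill] = {}

    def register(self, skill: Skill) -> None:
        self._skills[skill.name] = skill

    def lookup(self, name: str, min_safety: float = 0.8) -> Skill:
        skill = self._skills[name]
        if skill.safety < min_safety:
            raise PermissionError(f"skill {name!r} is below the safety threshold")
        return skill


# Usage: one agent registers a skill; a different agent reuses it immediately.
registry = SkillRegistry()
registry.register(Skill(name="parse_csv_header",
                        run=lambda line: line.strip().split(","),
                        safety=0.95, completeness=0.9, executability=1.0,
                        maintainability=0.9, cost=0.01))
print(registry.lookup("parse_csv_header").run("id,name,score"))
# -> ['id', 'name', 'score']
```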
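
To make the modularity argument in the second item concrete, here is a toy top-1 MoE forward pass in numpy. The shapes, the single-token input, and the top-1 gate are simplifying assumptions for illustration; DeepSeek V3's actual fine-grained MoE architecture is far larger and more elaborate.

```python
# Toy MoE layer: each expert is an independent weight matrix, so a single
# expert can be fine-tuned or swapped without touching the others. A dense
# layer has no such seam; updating it means retraining the whole matrix.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts = 8, 4

experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
router_w = rng.normal(size=(d_model, n_experts))


def moe_forward(x: np.ndarray) -> np.ndarray:
    logits = x @ router_w            # score each expert for this token
    chosen = int(np.argmax(logits))  # top-1 gating, for simplicity
    return x @ experts[chosen]       # only one expert's weights are used


x = rng.normal(size=(d_model,))
print(moe_forward(x).shape)  # (8,)

# "Capability upgrade": replace one expert in place; the router and the other
# experts are untouched, which is the modularity the post attributes to MoE.
experts[2] = rng.normal(size=(d_model, d_model))
print(moe_forward(x).shape)  # (8,)
```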
